Generative AI in the Enterprise: A Sober Take on What's Working in 2026

Here is the honest picture of generative AI in the enterprise in 2026. Seventy-one percent of organizations regularly use it. More than 80 percent report no measurable impact on enterprise-level EBIT. Workers who use it daily save an average of 5.4 percent of their work hours weekly. Ninety-two percent of daily users report productivity gains. And according to Deloitte's 2026 State of AI in the Enterprise report, only 34 percent of organizations are truly reimagining their businesses with AI rather than using it to optimize what already exists.

Both halves of that picture are true simultaneously. Generative AI is producing real, measurable productivity gains for individuals who use it regularly. And it is failing to produce enterprise-level financial outcomes for the majority of organizations that have invested in it. Those are not contradictory findings. They describe the same underlying problem: individual productivity gains do not automatically aggregate into business-level results, and most organizations have not built the conditions under which they would.

A sober assessment of what is working in 2026 requires holding both of these things at once and being honest about what the evidence actually shows, rather than defaulting to either the optimistic narrative that AI is transforming everything or the skeptical narrative that it is mostly hype.

What Is Genuinely Working

Software Development and Engineering

Coding is the breakout use case in enterprise generative AI, and the evidence for it is more robust than for almost any other category. Menlo Ventures' 2025 State of Generative AI in the Enterprise report, based on a survey of approximately 500 US enterprise decision-makers, found that coding captured $4 billion of the $7.3 billion in departmental AI spending in 2025, making it the single largest category across the entire application layer. Engineering teams account for more than half of that departmental spend, and the productivity gains are among the most consistently documented in the research.

The reason coding works so well as a generative AI use case is structural. The output is verifiable: code either runs or it does not, passes tests or fails them. The feedback loop is tight. The workflow was already digital and tool-mediated. The gain from AI assistance (faster generation of boilerplate, more efficient debugging, better documentation, quicker context-switching between codebases) is immediately visible and measurable in developer velocity. The last-mile problem that undermines other use cases, where individual gains fail to aggregate to process-level outcomes, is less severe in software development because the output of an individual developer's work is directly integrated into a team's delivery pipeline.

Organizations that have deployed AI coding assistants and measured the results consistently report meaningful throughput improvements. The gains are real enough that the category has attracted sustained enterprise investment and produced multiple products generating over $100 million in annual recurring revenue within two years of launch.

Content and Document Workflows at High Volume

The second category where generative AI is producing consistent, measurable results is high-volume content and document workflows: drafting, summarizing, extracting, formatting, and reviewing documents across functions. Legal contract review. Financial report generation. Customer communications. Internal knowledge summarization. Policy documentation.

The common thread is volume. Generative AI does not dramatically accelerate a task that happens once a week for one person. It dramatically accelerates a task that happens a hundred times a day across a team, because the time saving per instance multiplies across the population of instances. When Quilter, the UK wealth manager, estimates Microsoft 365 Copilot will save more than 13,000 hours per month of post-call documentation time for its highest-cost staff, the scale of the saving comes from a high-frequency task being compressed across a large population of users, not from a single dramatic transformation.
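To see how the multiplication works, consider a back-of-envelope sketch with hypothetical numbers rather than Quilter's actual figures: if AI drafting cuts post-call documentation from 15 minutes to 5 minutes per call, and 2,000 staff each handle 40 documented calls per month, the saving is

10 minutes × 40 calls × 2,000 users = 800,000 minutes ≈ 13,300 hours per month.

No individual call is transformed; the aggregate is.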

The organizations capturing this value have identified the specific high-frequency document workflows where generative AI assistance produces consistent quality output and have redesigned those workflows to incorporate AI as a default step rather than an optional tool. The ones that have simply made AI tools available and hoped employees would figure out how to use them are capturing only a fraction of the available value.

Customer-Facing Service at Scale

Customer service and support is the third category where documented results exist and the evidence is credible. ServiceNow reported that 89 percent of its own customer self-service requests were supported by AI in 2025, saving employees over 2.3 million hours. Cisco projects that 56 percent of customer support interactions will involve agentic AI by mid-2026. The gains in this category come from a combination of generative AI for response drafting and summarization, and agentic AI for autonomous resolution of structured requests.

The conditions that make customer service work as a generative AI use case are similar to the conditions that make coding work: high volume, structured knowledge base, verifiable output quality, and a tight feedback loop between the AI's response and the customer's resolution rate. Contact center operations were also already heavily instrumented for measurement before AI arrived, which means the before-and-after comparison is cleaner than in functions where measurement was sparse.

What Is Not Working as Advertised

Enterprise-Wide Productivity Transformation

The headline claim for generative AI in 2023 and 2024 was that it would transform enterprise productivity broadly, across all knowledge work functions simultaneously. That claim has not materialized at the enterprise level, even though it has materialized at the individual level for daily users.

The mechanism of the gap is the one described in previous research: individual productivity gains require process-level redesign to aggregate to business-level financial outcomes. A finance analyst who summarizes reports 40 percent faster with AI assistance is not producing 40 percent more financial value unless the time freed is redirected to higher-value work that the analyst's function can actually absorb. If the time saving disappears into slightly longer lunch breaks or slightly more comfortable task completion, the productivity gain is real in a narrow sense and invisible in the P&L.
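To put assumed numbers on that reasoning: if report summarization occupies 20 percent of the analyst's week and AI compresses it by 40 percent, the capacity freed is 0.40 × 20 percent = 8 percent of the analyst's total hours. Whether that 8 percent ever reaches the P&L depends entirely on whether the function redeploys it to work it can absorb; nothing in the tool itself guarantees that it will.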

Deloitte's 2026 report is direct about this: worker access to AI rose 50 percent in 2025, yet only 34 percent of business leaders are genuinely reimagining operations around AI. The remaining 66 percent are deploying tools without changing the processes and organizational structures that determine whether those tools produce business value. Tool adoption is not operational transformation. Most organizations have achieved the former without building the latter.

Creative and Strategic Work

Generative AI has proven less transformative in genuinely creative and strategic work than the early claims suggested. It is useful as a drafting aid, a brainstorming partner, and a starting point for analysis. It is not a reliable substitute for experienced strategic judgment, nuanced stakeholder communication, or creative work that requires contextual depth and original synthesis.

The organizations that have deployed generative AI into high-stakes strategic and creative workflows without establishing quality review processes have in some cases created new risks: hallucinated facts in financial analysis, confidently wrong legal summaries, generic strategic recommendations that do not account for organizational context. The fluency of large language model output lends it a credibility that its accuracy does not always warrant, which is a specific failure mode for senior-level work where the cost of a wrong answer is high.

The practical implication is not that generative AI should not be used in strategic work. It is that the workflow design for strategic work needs to treat AI output as a draft requiring expert review rather than as a finished product requiring only approval. That distinction is not always built into deployment decisions, particularly when the deployment is being driven by enthusiasm for the technology rather than careful analysis of where it adds genuine value.

Regulated Industry Deployment at Scale

Financial services, healthcare, and other regulated industries have faced specific barriers to generative AI deployment that have slowed their progress relative to the enterprise average. Data sovereignty requirements restrict what can be sent to external model providers. Explainability requirements for decisions that affect customers create governance challenges that standard generative AI systems do not natively satisfy. Hallucination rates that are acceptable for internal productivity tools are not acceptable for customer-facing outputs in environments where accuracy is a regulatory requirement.

These industries are not absent from the generative AI deployment picture, but their deployments are concentrated in internal productivity use cases rather than customer-facing or decision-critical applications. Customer-facing deployment in regulated industries is happening, but at a slower pace, with more governance overhead, and with a more conservative scope than comparable deployments in less regulated environments.

The Pattern That Separates Results from Activity

Across the use cases where generative AI is producing documented enterprise-level results, a consistent pattern separates organizations capturing value from those generating activity without outcomes.

The organizations capturing value started with a specific business problem, identified a specific workflow where generative AI could address it, redesigned that workflow rather than layering AI on top of the existing one, measured the outcome against a business metric rather than an AI metric, and used the evidence of that outcome to fund the next initiative. They treated generative AI as a workflow transformation capability rather than a technology to deploy in the hope that results would follow.

The organizations generating activity without outcomes started with the technology, deployed it broadly, measured adoption rates and user satisfaction, and reported on AI usage rather than AI impact. They provided tools without redesigning the processes those tools were supposed to improve. They measured the input rather than the output.

MIT's research found that organizations that buy generative AI from specialized vendors rather than building internally succeed at roughly double the rate of those that build. The mechanism is not that vendor solutions are technically superior. It is that purchasing a specialized solution forces the organization to define what problem the solution is solving before the money is spent, while internal builds can proceed indefinitely without that definition being made explicit.

What 2026 Actually Looks Like for the Median Enterprise

The median large enterprise in 2026 has generative AI tools deployed and in active use across several functions. A meaningful portion of its knowledge workers use those tools regularly and report productivity gains. Its AI program has not yet produced a measurable impact on enterprise-level EBIT. Its leadership is under increasing board pressure to demonstrate ROI. Its AI team is expanding the portfolio of use cases while its finance function is questioning the return on the investment already made.

That is not a failure state. It is a transition state. The organizations that move through it successfully are the ones that use the current moment to stop expanding the portfolio of activity and start concentrating investment on the subset of use cases where the conditions for enterprise-level value creation actually exist: clear business problem, redesigned workflow, measurable business outcome, named owner. The ones that continue expanding activity without addressing the conditions that translate activity into outcomes will find themselves in the same position twelve months from now, with a larger AI budget and a similar P&L story.

Generative AI will produce transformative enterprise outcomes for the organizations that build the conditions for those outcomes. The technology is capable of it. Most organizations have not yet built those conditions. That is the honest state of play in 2026, and it is more useful than either the optimist's version or the skeptic's.

Talk to Us

ClarityArc helps organizations move from generative AI activity to generative AI outcomes by identifying the use cases with genuine enterprise-level return potential and building the workflow and governance conditions that allow those use cases to scale. If your AI program is generating more activity than results, we are ready to help you close the gap.

Get in Touch